Stop wasting your time on win themes
Wait, what? But ‘best practice’ says…
Yes, for half a century the industry dogma has been that win themes are the essential weapon in competitive bidding. I call BS.
Practitioners will tell you they’re essential. Various books, BoKs, and blogs will give you entire chapters on them. Contractors will charge you handsomely to lock your team in a workshop until you’ve hammered out a pithy sentence to satisfy internal governance demands. And yet… in the public sector context, they generally end up being a colossal time sink with no proven impact on your score.
The problem
Let’s be honest: creating win themes is like pulling teeth. It’s the part of the bid process where energy drains out of the room. People fidget. Someone starts muttering about “key differentiators” until someone else smugly states they should be “discriminators”, and the rest ponder lunch options. The painful reality is that there are no clear-blue-water discriminators, because we’re working in mature industries that already have good solutions to customer problems. Even brand colours are mostly the same.
In our hypothetical win strategy room, the depressed facilitator drags everyone back to ‘Know Your Customer’ and predictably discovers no one is entirely sure who the customer team marking this will be. Actions are jotted down to ‘find out’. No one will. The truth is, most bid teams looking at the controlled environment of government procurement don’t have the information to make win themes genuinely distinctive. You don’t know enough about the customer’s internal politics, who they have on the evaluator team, or exactly how your competitors are shaping their offers. Without that, your “memorable message” becomes a beige, interchangeable slogan that could have been lifted from any rival’s boilerplate. I’ve even heard that we shouldn’t worry about the evaluator because they’re not the ‘real’ customer writing the requirements or delivering the services… which is interesting, because the ‘real’ customer isn’t the one marking your homework. The evaluator is.
Back to our hypothetical room. Eventually someone gets annoyed enough to say, “what about ‘Trusted Partner. Proven Results. Safe Pair of Hands’?”. Another suggests “your vision, our mission” and everyone decides that’s the winner because no one has the strength to argue. The Bid Manager is tasked with fleshing this out using generative AI and the garbled transcript of the meeting.
The only joy comes when someone goes full dad-joke and delivers a pun that makes the team laugh. You feel like you’ve had a burst of creativity, maybe even get it printed on a mug. But the evaluator (a trained subject matter expert with a scoring sheet) isn’t awarding points for wordplay (tragically – they should IMHO*). They’re scoring evidence against published criteria. They’re not scoring quality relatively against your competition – laying out all the proposals that have a similar solution and saying, ‘I like that one more because they have a super-compelling win theme’. They’re not allowed to.
————————-
* If you’ve just clocked the m-dash and thought OMG itz the AIz – it’s not. I’ve used them for decades and love them and hug them and I’m keeping them. So there. When Chat learns ellipsis I’m going to stop writing and open that bar I’ve been hankering after for years.
Why we still do it
The reason is depressingly simple: because “best practice” says so. And where did that come from? A group of bid-writing practitioners, mostly in the US federal and wider Western private-sector contracting world, decided decades ago that it was a good idea. No one asked for proof.
If you challenge it, the faithful have a ready list of reasons. They’ll say win themes make your differentiators stick in an evaluator’s mind. They’ll tell you it helps keep the team aligned. They’ll offer anecdotes about bids they won with win themes and bids they lost without them. They might even wheel out some behavioural psychology – Cialdini on persuasion, or research on how repeated messages improve recall. All of which may have some relevance in marketing or commercial proposal environments. But…
It all sounds convincing until you remember two things: none of it is public procurement-specific, and evaluators here aren’t scoring you based on recall or sentiment. They’re working to a legal framework that forces them to assess what’s on the page, against the criteria, with evidence. Your clever rhyme isn’t getting a mark.
Where’s the evidence?
There isn’t any. Not for UK public sector bids. We can’t find any peer-reviewed studies showing that win themes improve evaluation scores under our rules. The “proof” is almost always anecdote, unchallenged non-science, or a study-of-one (themselves).
Yes, there’s marketing research showing that memorable messages stick. Yes, behavioural economics tells us people can be influenced by framing. But public procurement is deliberately designed to strip those influences away. Subject matter experts (evaluators must be SMEs) score you against published criteria. We know the reality is human, but when we are resource- and time-constrained, would you rather spend precious time on win themes or on incrementally increasing your scores?
What the law and research say actually matters
Public procurement evaluation in the UK is not about what feels more persuasive than another proposal; it’s about what can be independently scored, defended, and audited. The Public Contracts Regulations 2015 and the Procurement Act 2023 (PA23) require evaluation against the published award criteria only. HM Treasury’s Green Book sets the expectation for transparent, evidence-based decisions. Case law backs this up. In Bechtel v HS2 Ltd (2021), the court recognised the “competence of evaluators” and explained that they are not comparing bids (so why do we fixate on the whole ‘minimise your weaknesses and emphasise the competition’s’ rigmarole?):
“This is not a comparative exercise… The evaluators for the questions in the Technical Envelope were not supposed to compare the answers of Bidder X and Bidder Y (and Bidder Z), decide which was “better”, and then rank them in some way. They were not supposed to compare the answers at all. The answer of Bidder X to each Technical Question was to be considered independently of the answers of Bidder Y. If an answer gave the assessors a certain level of confidence, they would be scored (for example) with Good Confidence, and that bidder would receive the appropriate score towards its total.”
The findings also cover the reality that evaluators make subjective calls, but critically these flow not from compelling language but from expertise and experience: “whether the answer to the question demonstrates or represents either a “substantial risk” or a “significant risk” to HS2, this is a subjective judgement call for the evaluators. They will reach their answer based on their expertise and experience.” So yes, evaluators disagree on their initial draft scores, then discuss in moderation to reach a consensus score. How on earth would they justify being swayed by a message that is not part of the ITT or criteria?
In Bromcom v United Learning (2022), the court found averaging scores without proper moderation was unlawful, and reiterated that marks must be justified against the criteria.
Research tells a similar story. Dekel & Schurr’s 2014 study with real procurement evaluators showed they generally stick to criteria unless bias is introduced structurally (for example, seeing price before scoring quality). Bergman & Lundberg’s analysis of EU procurement scoring found that outcomes are driven by how criteria are defined and the evidence bidders provide.
Where exactly do your win themes fit in this process?
What evaluators actually look for
Government guidance and evaluator interviews paint a consistent picture. Evaluators want to see that you understand the problem or challenge in their terms, because understanding the issues means you have a fighting chance of delivering. They want confidence you can deliver a credible, tailored solution with the right people, processes, tools, and tech. They look for evidence. They’re open to added value, but only if it’s relevant to their needs.
There’s no scoring box for “win theme came through loud and clear and was super-differentiating so how could I possibly pick the competition, despite the fact I’m not supposed to evaluate quality relatively, but hey”.
The counter-argument: but they align teams
True – internal alignment matters. But there are better tools for it: a solution on a page (SOAP), a pre-mortem, even a well-run storyboard process. These keep authors coherent while also feeding directly into solution and evidence-based answers that map to the scoring.
If your team needs a rhyming couplet to remember what to write, your problem isn’t the absence of win themes, it’s the absence of a structured, criteria-driven solutioning and writing process.
Ask anyone who has been tasked with shoving win themes into responses at the eleventh hour how good win themes are at aligning authors who will ‘weave the golden thread through’ their carefully crafted response…
The alternative
What’s the one glaring omission from ‘best practice’ and even the APMP BoK? Bizarrely, the solution barely gets a mention. It’s something those annoying solution people do when they are not writing. It’s the oddest blind spot.
We need to bring the solution back into the heart of the bid. We need to place that solution back into the context of the customer’s need. We need to develop a solution strategy and solution themes.
What’s a solution strategy? It’s working out what the customer wants, and building it for them.
Say a customer needs to rapidly modernise their IT and services in the face of modern end-user expectations about what IT can do and what good service means. We learn what we can about that need, what’s driving it, and where they are now. We make solution decisions on how we move them from where they are now to the future they need – so rather than a bomb-proof waterfall project, we’ll go for the hybrid agile model we know worked with this customer before; rather than a human-heavy service desk, we’ll use the new agentic solutions we’ve developed. Those decisions draw on the technical and delivery options in our kit bag (to be explicit: I’m including pricing and commercial decisions in that solution because they are intrinsic parts of the whole). Exactly HOW are you going to make things better/bigger/faster/cheaper/less risky/etc.?
At the end of this process we’ve got a solution that’s complete (because we’re systematically looking across their needs and how we can deliver them), coherent (because everyone is in the room), and hopefully compelling (because it’s solving their problems in a way that is tailored to them and maybe adds some value).
But that’s a Win Strategy, I hear you cry… no, it’s not. Win Strategies obsess over people we’ve guessed the identity of who aren’t even involved in the evaluation, and over the competition in the abstract because, tbh, we’ve no idea what they’re doing. A great way to spend a wet Wednesday or fill out a deck that governance has demanded, but…
What’s a solution theme? It’s telling the customer how we can deliver something that will solve their specific problem because we’ve built the solution they want.
So in our example above, we’re not going to spend time working out how we emphasise our strengths, minimise our competitors’ strengths, and all the rest of it, then come up with “so you can grow as per your annual report, you’ll get a modern CX through our proprietary AI-enabled decisions in the service desk, which we have successfully delivered to the Department of Random” (which, by the way, your competitors will say as well because they’re following the same hackneyed process).
Instead, we need to work out what a modern CX needs to be for this customer, with the types of end user they have and the contact volumes they experience. We’ll analyse the data they’ve provided in the murky data room. We’ll work out what we can deliver to solve that problem at a cost (and pricing mechanism) the customer can buy. And we’ll write down that ‘to deliver the modern CX their customers want, we’ve analysed xxxx data and built xxxx user personas, seeing xxxx results. Given this, we’ll incrementally implement agentic solutions built with the accessibility requirements x% of your users need, etc. etc.’ We show the understanding. We show the thinking behind the solution (the why) so they really believe we can deliver the requirements, so they score us well. We provide specific evidence to back it up. We’re clear on who is delivering what, when, and where. We provide a real ‘HOW’ that they have asked for.
Down at this level we can write (or speak or demonstrate) a solution that is ‘complete, coherent, and hopefully compelling’ from the strategy work, and now compliant (we’re looking at the requirements and information provided for once), clear (we get marks for ‘answering the requirement fully’), credible (we get marks for ‘confidence in delivery’), and convincing (because we’ve provided the evidence in some form that they need to tick that box).
Solution themes align with what the law, government guidance, and research say matters: quality of solution, relevance, and proof.
You’ve got this far. Well done!
Dump the “win theme” dogma. It’s a legacy import from an old procurement world, unproven in the UK public sector of today, and irrelevant to the way evaluations are actually run. Build a solution strategy and solution themes instead, grounded in the problem, the published criteria, and what your evaluators will actually score.
You do still need a SOAP. But that’s for another post…